32 research outputs found

    Increasing service visibility for future, softwarised air traffic management data networks

    Air Traffic Management (ATM) is at an exciting frontier. The volume of air traffic is reaching the safe limits of current infrastructure, yet demand for more air traffic continues to grow. To meet capacity demands, ATM data networks are increasing in complexity, with greater infrastructure integration, higher availability and precision of services, and the introduction of unmanned systems. Official recommendations following previous disruptive outages have highlighted the need for operators to have richer monitoring capabilities and on-demand visibility of operational systems in response to challenges. The work presented in this thesis helps ATM operators better understand and increase visibility into the behaviour of their services and infrastructure, with the primary aim of informing decision-making to reduce service disruption. This is achieved by combining a container-based NFV framework with Software-Defined Networking (SDN). The application of SDN+NFV in this work allows lightweight, chainable monitoring and anomaly detection functions to be deployed on demand, and the appropriate (sub)set of network traffic to be routed through these virtual network functions to provide timely, context-specific information. This container-based function deployment architecture allows for punctual in-network processing through the instantiation of custom functionality at appropriate locations. When accidents do occur, such as the crash of a UAV, the lessons learnt should be integrated into future systems. For one such incident, the accident investigation identified a telemetry precursor an hour prior to the crash. The function deployment architecture allows operators to extend and adapt their network infrastructure to incorporate the latest monitoring recommendations. Furthermore, this work has examined relationships between application-level information and network-layer data, through individual examples representing a wide range of generalisable cases, including: the link between the cyber and physical components of surveillance data, the rate of change in telemetry to determine abnormal aircraft surface movements, and the emerging behaviour of network flooding. Each of these examples provides valuable context-specific benefits to operators and a generalised basis from which further tools can be developed to enhance operators' understanding of their networks.
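    The abstract above mentions, among its examples, using the rate of change in telemetry to detect abnormal aircraft surface movements, but gives no implementation detail. As an illustration only, a minimal Python sketch of that idea might look like the following; the class, the function name and the 5 m/s^2 threshold are hypothetical and are not taken from the thesis.

        # Hypothetical sketch: flag telemetry samples whose implied acceleration
        # exceeds a plausibility threshold. Names and threshold are illustrative,
        # not from the thesis.
        from dataclasses import dataclass

        @dataclass
        class TelemetrySample:
            timestamp: float     # seconds
            ground_speed: float  # metres per second reported by the aircraft

        def abnormal_surface_movements(samples, max_accel=5.0):
            """Return indices of samples whose implied acceleration exceeds max_accel (m/s^2)."""
            flagged = []
            for i in range(1, len(samples)):
                dt = samples[i].timestamp - samples[i - 1].timestamp
                if dt <= 0:
                    continue  # skip out-of-order or duplicate timestamps
                rate = abs(samples[i].ground_speed - samples[i - 1].ground_speed) / dt
                if rate > max_accel:
                    flagged.append(i)
            return flagged

    In the architecture described above, a function like this would be packaged as a lightweight container and the relevant subset of traffic steered through it on demand via SDN, rather than running as a fixed part of the network.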

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
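    For readers unfamiliar with how the reported posterior probabilities relate to the credible intervals, a toy sketch (not the trial's actual analysis code) is shown below: given posterior draws of an odds ratio from a bayesian model in which OR > 1 means improvement, the posterior probability of harm is simply the share of draws falling below 1. The distribution parameters here are invented for illustration only.

        # Toy illustration, not the trial's analysis: posterior draws of an odds
        # ratio, the 95% credible interval, and the posterior probability of harm
        # (OR < 1 when OR > 1 denotes improvement). Parameters are made up.
        import numpy as np

        rng = np.random.default_rng(0)
        log_or_draws = rng.normal(loc=np.log(0.77), scale=0.15, size=100_000)
        or_draws = np.exp(log_or_draws)

        ci_low, ci_high = np.percentile(or_draws, [2.5, 97.5])  # 95% credible interval
        p_harm = (or_draws < 1).mean()                           # posterior P(OR < 1)
        print(f"OR 95% CrI: {ci_low:.2f}-{ci_high:.2f}, P(harm) = {p_harm:.1%}")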

    AusTraits: a curated plant trait database for the Australian flora

    INTRODUCTION
    AusTraits is a transformative database containing measurements on the traits of Australia's plant taxa, standardised from hundreds of disconnected primary sources. So far, data have been assembled from > 250 distinct sources, describing > 400 plant traits and > 26,000 taxa. To harmonise these diverse sources, we use a reproducible workflow that implements the changes required to reformat each source for incorporation into AusTraits. Such changes include restructuring datasets, renaming variables, changing variable units, and changing taxon names. While this repository contains the harmonised data, the raw data and the code used to build the resource are also available on the project's GitHub repository, http://traitecoevo.github.io/austraits.build/. Further information on the project is available in the associated publication and at the project website austraits.org.

    Falster, Gallagher et al (2021) AusTraits, a curated plant trait database for the Australian flora. Scientific Data 8: 254, https://doi.org/10.1038/s41597-021-01006-6

    CONTRIBUTORS
    The project is jointly led by Dr Daniel Falster (UNSW Sydney), Dr Rachael Gallagher (Western Sydney University), Dr Elizabeth Wenk (UNSW Sydney), and Dr Hervé Sauquet (Royal Botanic Gardens and Domain Trust Sydney), with input from > 300 contributors from > 100 institutions (see full list above). The project was initiated by Dr Rachael Gallagher and Prof Ian Wright while at Macquarie University. We are grateful to the following institutions for contributing data: Australian National Botanic Garden, Brisbane Rainforest Action and Information Network, Kew Botanic Gardens, National Herbarium of NSW, Northern Territory Herbarium, Queensland Herbarium, Western Australian Herbarium, South Australian Herbarium, State Herbarium of South Australia, Tasmanian Herbarium, and the Department of Environment, Land, Water and Planning, Victoria.

    AusTraits has been supported by investment from the Australian Research Data Commons (ARDC), via their "Transformative data collections" (https://doi.org/10.47486/TD044) and "Data Partnerships" (https://doi.org/10.47486/DP720) programs; fellowship grants from the Australian Research Council to Falster (FT160100113), Gallagher (DE170100208) and Wright (FT100100910); and a grant from Macquarie University to Gallagher. The ARDC is enabled by the National Collaborative Research Infrastructure Strategy (NCRIS).

    ACCESSING AND USE OF DATA
    The compiled AusTraits database is released under an open licence (CC-BY), enabling re-use by the community. A requirement of use is that users cite the AusTraits resource paper, which includes all contributors as co-authors: Falster, Gallagher et al (2021) AusTraits, a curated plant trait database for the Australian flora. Scientific Data 8: 254, https://doi.org/10.1038/s41597-021-01006-6. In addition, we encourage users to cite the original data sources wherever possible. Note that under the licence, data may be redistributed provided the attribution is maintained. The downloads below provide the data in two formats:

    austraits-3.0.2.zip: data in plain-text format (.csv, .bib, .yml files). Suitable for anyone, including those using Python.
    austraits-3.0.2.rds: data as a compressed R object. Suitable for users of R (see below).

    Both objects contain all the data and relevant metadata.

    AUSTRAITS R PACKAGE
    For R users, access and manipulation of the data is assisted by the austraits R package. The package can download the data and provides examples and functions for running queries.

    STRUCTURE OF AUSTRAITS
    The compiled AusTraits database has the following main components:

    austraits
    ├── traits
    ├── sites
    ├── contexts
    ├── methods
    ├── excluded_data
    ├── taxonomic_updates
    ├── taxa
    ├── definitions
    ├── contributors
    ├── sources
    └── build_info

    These elements include all the data and contextual information submitted with each contributed dataset. A schema and definitions for the database are given in the definitions component, available within the download. The file dictionary.html provides the same information in textual format. Full details on each of these components and their columns are contained within the definitions. Similar information is available at http://traitecoevo.github.io/austraits.build/articles/Trait_definitions.html and http://traitecoevo.github.io/austraits.build/articles/austraits_database_structure.html.

    CONTRIBUTING
    We envision AusTraits as an ongoing collaborative community resource that: increases our collective understanding of the Australian flora; facilitates accumulation and sharing of trait data; builds a sense of community among contributors and users; and aspires to fully transparent and reproducible research of the highest standard. As a community resource, we are very keen for people to contribute. Assembly of the database is managed on GitHub at traitecoevo/austraits.build. Here are some of the ways you can contribute:

    Reporting errors: If you notice a possible error in AusTraits, please post an issue on GitHub.
    Refining documentation: We welcome additions and edits that make using the existing data or adding new data easier for the community.
    Contributing new data: We gladly accept new data contributions to AusTraits. See full instructions on how to contribute at http://traitecoevo.github.io/austraits.build/articles/contributing_data.html
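    For users working from the plain-text download rather than the R package, a minimal Python sketch of loading and querying the trait table is given below. It assumes the zip archive contains the trait measurements as a file named traits.csv with columns such as taxon_name, trait_name and value; the bundled definitions and dictionary.html are the authoritative description of the actual file names and schema, and the taxon used in the query is only an example.

        # Minimal sketch, assuming the plain-text AusTraits download contains a
        # traits.csv table with taxon_name / trait_name / value columns; consult
        # the bundled definitions for the authoritative schema.
        import zipfile
        import pandas as pd

        with zipfile.ZipFile("austraits-3.0.2.zip") as archive:
            with archive.open("traits.csv") as f:
                traits = pd.read_csv(f)

        # Example query: all leaf area measurements for one (example) taxon.
        subset = traits[(traits["trait_name"] == "leaf_area") &
                        (traits["taxon_name"] == "Eucalyptus saligna")]
        print(subset[["taxon_name", "trait_name", "value"]].head())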

    AusTraits, a curated plant trait database for the Australian flora

    We introduce the AusTraits database, a compilation of values of plant traits for taxa in the Australian flora (hereafter AusTraits). AusTraits synthesises data on 448 traits across 28,640 taxa from field campaigns, published literature, taxonomic monographs, and individual taxon descriptions. Traits vary in scope from physiological measures of performance (e.g. photosynthetic gas exchange, water-use efficiency) to morphological attributes (e.g. leaf area, seed mass, plant height), which link to aspects of ecological variation. AusTraits contains curated and harmonised individual- and species-level measurements coupled to, where available, contextual information on site properties and experimental conditions. This article provides information on version 3.0.2 of AusTraits, which contains data for 997,808 trait-by-taxon combinations. We envision AusTraits as an ongoing collaborative initiative for easily archiving and sharing trait data, which also provides a template for other national or regional initiatives globally to fill persistent gaps in trait knowledge.

    DUNE Offline Computing Conceptual Design Report

    This document describes Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.

    This document describes the conceptual design for the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE). The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve the physics goals of the experiment by storing, cataloging, reconstructing, simulating, and analyzing approximately 30 PB of data per year from DUNE and its prototypes. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions and advanced algorithms as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
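    The abstract above describes the general pattern (algorithms written in Python and compiled to CUDA kernels with Numba) without showing code. Purely as an illustration of that pattern, and not the DUNE simulator's actual kernels, a minimal Numba kernel that accumulates per-pixel charge might look like the sketch below; the names and the arithmetic are hypothetical placeholders, and running it requires a CUDA-capable GPU.

        # Illustrative sketch of the Python + Numba CUDA pattern described above.
        # Function names and the charge model are placeholders, not the actual
        # DUNE Near Detector simulation.
        import numpy as np
        from numba import cuda

        @cuda.jit
        def accumulate_pixel_current(segment_charge, segment_pixel, pixel_current):
            # One thread per track segment; atomically add its charge to its pixel,
            # since many segments may map to the same pixel.
            i = cuda.grid(1)
            if i < segment_charge.shape[0]:
                cuda.atomic.add(pixel_current, segment_pixel[i], segment_charge[i])

        # Host-side driver: copy inputs to the GPU and launch one thread per segment.
        n_segments, n_pixels = 100_000, 1_000
        charge = cuda.to_device(np.random.rand(n_segments).astype(np.float32))
        pixels = cuda.to_device(np.random.randint(0, n_pixels, n_segments).astype(np.int32))
        current = cuda.to_device(np.zeros(n_pixels, dtype=np.float32))

        threads_per_block = 256
        blocks = (n_segments + threads_per_block - 1) // threads_per_block
        accumulate_pixel_current[blocks, threads_per_block](charge, pixels, current)
        result = current.copy_to_host()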